
    IoT Security Vulnerabilities and Predictive Signal Jamming Attack Analysis in LoRaWAN

    The Internet of Things (IoT) has gained popularity in recent times due to its flexibility, usability, diverse applicability and ease of deployment. However, the related security issues remain less explored. IoT devices are lightweight in nature and have low computational power, short battery life and little memory. As incorporating security features is resource-expensive, IoT devices are often poorly protected, and in recent times they have been routinely attacked due to high-profile security flaws. This paper explores the security vulnerabilities of IoT devices, particularly those that use Low Power Wide Area Networks (LPWANs). In this work, LoRaWAN-based IoT security vulnerabilities are scrutinised and loopholes are identified. An attack was designed and simulated using a predictive model of the device's data generation. The paper demonstrates that, by predicting the data generation model, a jamming attack can be carried out to block devices from sending data successfully. This research will aid the continual development of any necessary countermeasures and mitigations for LoRaWAN and, more generally, for the LPWAN functionality of IoT networks.
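    As a rough illustration of the timing-prediction idea (not the paper's actual model or code), the sketch below estimates a device's next uplink time from its observed inter-arrival gaps, which is the information a jammer would need to schedule a colliding burst; all names and values are hypothetical.

```python
# Illustrative sketch (not the paper's code): predicting periodic LoRaWAN
# uplink times from observed inter-arrival gaps so a jamming burst could be
# scheduled to collide with the next transmission.
from statistics import mean

def predict_next_uplink(observed_times):
    """Estimate the next transmission time from past uplink timestamps."""
    gaps = [b - a for a, b in zip(observed_times, observed_times[1:])]
    return observed_times[-1] + mean(gaps)

# Example: a sensor that reports roughly every 60 s.
uplinks = [0.0, 60.2, 119.9, 180.1]
jam_at = predict_next_uplink(uplinks)
print(f"Schedule jamming burst around t = {jam_at:.1f} s")
```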

    Statistical t+2D subband modelling for crowd counting

    Counting people automatically in a crowded scene is important for assessing safety and determining behaviour in surveillance operations. In this paper we propose a new algorithm that uses the statistics of spatio-temporal wavelet subbands. A t+2D lifting-based wavelet transform is exploited to generate a motion saliency map, which is then used to extract novel parametric statistical texture features. We compare our approach to existing crowd counting approaches and show improvement on standard benchmark sequences, demonstrating the robustness of the extracted features.
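    The following simplified sketch (an assumption, not the paper's transform) shows a single temporal Haar lifting step over a stack of frames, whose high-pass detail magnitude acts as a crude motion-saliency map; the paper's t+2D transform and parametric statistical texture features are considerably richer.

```python
# Minimal sketch: one temporal Haar lifting step over a short frame stack.
# The detail (high-pass) magnitude serves as a simple motion-saliency map.
import numpy as np

def temporal_haar_lifting(frames):
    """frames: array of shape (T, H, W) with even T. Returns approx, detail."""
    even, odd = frames[0::2], frames[1::2]
    detail = odd - even              # predict step: temporal differences
    approx = even + 0.5 * detail     # update step: temporal average
    return approx, detail

frames = np.random.rand(8, 120, 160)          # stand-in for a video clip
approx, detail = temporal_haar_lifting(frames)
saliency = np.abs(detail).mean(axis=0)        # per-pixel motion saliency
print(saliency.shape)                          # (120, 160)
```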

    The multimedia blockchain: a distributed and tamper-proof media transaction framework

    A distributed and tamper-proof media transaction framework is proposed based on the blockchain model. Current multimedia distribution does not preserve self-retrievable information about transaction trails or content modification histories. For example, digital copies of valuable artworks, creative media and entertainment content are distributed for various purposes, including exhibitions, gallery collections or media production workflows. Original media is often edited during creative content preparation, or tampered with to fabricate false propaganda over social media. However, there is no existing trusted mechanism that can easily retrieve either the transaction trails or the modification histories. We propose a novel watermarking-based Multimedia Blockchain framework that addresses these issues. The unique watermark carries two pieces of information: a) a cryptographic hash that encodes the transaction history (the blockchain transaction log) and b) an image hash that preserves retrievable original media content. Once the watermark is extracted, the first part is used to retrieve the historical transaction trail from a distributed ledger and the latter part is used to identify the edited or tampered regions. The paper outlines the requirements and challenges and demonstrates a proof of concept.
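    A hypothetical sketch of the two-part watermark payload described above: a cryptographic hash of the transaction log concatenated with a compact perceptual image hash. The payload layout and hash choices are illustrative only, not the framework's actual format.

```python
# Illustrative two-part watermark payload: 32-byte SHA-256 of the transaction
# log (ledger lookup key) followed by an 8-byte perceptual image hash.
import hashlib
import numpy as np

def transaction_hash(tx_log: str) -> bytes:
    return hashlib.sha256(tx_log.encode()).digest()          # 32 bytes

def average_image_hash(img: np.ndarray, size: int = 8) -> bytes:
    """Very simple perceptual hash: threshold a size x size block-mean image."""
    h, w = img.shape
    small = img[:h - h % size, :w - w % size].reshape(
        size, h // size, size, w // size).mean(axis=(1, 3))
    bits = (small > small.mean()).flatten()
    return np.packbits(bits).tobytes()                        # 8 bytes

def build_payload(tx_log: str, img: np.ndarray) -> bytes:
    return transaction_hash(tx_log) + average_image_hash(img)

def split_payload(payload: bytes):
    return payload[:32], payload[32:]   # ledger lookup key, content hash

img = np.random.rand(256, 256)
payload = build_payload("tx1->tx2->tx3", img)
ledger_key, content_hash = split_payload(payload)
```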

    Quality scalability aware watermarking for visual content

    Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios while not affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by a new wavelet-domain blind watermarking algorithm guided by a quantization-based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate a watermarked image with the desired distortion-robustness trade-off. A blind extractor is capable of extracting the watermark data from the watermarked images. The algorithm is further extended to incorporate the bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000. This improves robustness against the quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality scalable content adaptation. The proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows major improvement for video watermarking.
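    The sketch below shows generic quantisation-index-modulation (QIM) embedding, not the paper's binary-tree scheme: a bit is embedded by snapping a coefficient to an even or odd quantisation bin and can still be recovered blindly after moderate re-quantisation of the kind bit-plane discarding produces. All parameter values are illustrative.

```python
# Generic QIM sketch: embed one bit per coefficient on one of two lattices,
# then extract blindly by picking the nearer lattice.
import numpy as np

def qim_embed(coeff, bit, delta=16.0):
    """Embed one bit by quantising onto an even (0) or odd (1) lattice."""
    q = np.round((coeff - bit * delta / 2) / delta)
    return q * delta + bit * delta / 2

def qim_extract(coeff, delta=16.0):
    """Blind extraction: pick the lattice the coefficient is closest to."""
    d0 = abs(coeff - np.round(coeff / delta) * delta)
    d1 = abs(coeff - (np.round((coeff - delta / 2) / delta) * delta + delta / 2))
    return 0 if d0 <= d1 else 1

c = 103.7
marked = qim_embed(c, bit=1)
attacked = np.round(marked / 8) * 8      # crude re-quantisation "attack"
print(qim_extract(marked), qim_extract(attacked))   # 1 1
```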

    Visual attention-based image watermarking

    Imperceptibility and robustness are two complementary but fundamental requirements of any watermarking algorithm. Low-strength watermarking yields high imperceptibility but exhibits poor robustness. High-strength watermarking schemes achieve good robustness but often introduce distortions that degrade the visual quality of the host media. If the distortion due to high-strength watermarking avoids visually attentive regions, it is unlikely to be noticeable to a viewer. In this paper, we exploit this concept and propose a novel visual attention-based, highly robust image watermarking methodology that embeds lower- and higher-strength watermarks in visually salient and non-salient regions, respectively. A new low-complexity wavelet-domain visual attention model is proposed that allows us to design new robust watermarking algorithms. The proposed saliency model outperforms the state-of-the-art method in joint saliency detection and low computational complexity. In evaluating watermarking performance, the proposed blind and non-blind algorithms exhibit increased robustness to various natural image processing and filtering attacks with minimal or no effect on image quality, as verified by both subjective and objective visual quality evaluation. Improvements of up to 25% and 40% against JPEG2000 compression and common filtering attacks, respectively, are reported over existing algorithms that do not use a visual attention model.
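    As an illustrative sketch only, the snippet below modulates additive watermark strength with a saliency map so that visually attentive regions receive a weaker watermark; the saliency map here is a random placeholder, whereas the paper builds a wavelet-domain visual attention model.

```python
# Saliency-weighted additive embedding: low strength where salient,
# higher strength elsewhere. Strength values are placeholders.
import numpy as np

def saliency_weighted_embed(host, watermark, saliency,
                            alpha_low=0.02, alpha_high=0.10):
    """Embed with low strength in salient areas, high strength elsewhere."""
    s = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-9)
    alpha = alpha_high - (alpha_high - alpha_low) * s   # per-pixel strength
    return host + alpha * watermark

host = np.random.rand(256, 256)
wm = np.sign(np.random.randn(256, 256))           # spread-spectrum pattern
saliency = np.random.rand(256, 256)               # stand-in saliency map
marked = saliency_weighted_embed(host, wm, saliency)
```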

    Sentiment aware fake news detection on online social networks

    Messages posted to online social networks (OSNs) have recently caused a stir due to the intentional spread of fake news and rumours. In this work, we aim to understand and analyse the characteristics of fake news, especially in relation to sentiment, in order to support the automatic detection of fake news and rumours. Based on empirical observation, we hypothesise that there is a relationship between a fake message or rumour and the sentiment of the text posted online. We verify our hypothesis by comparing against state-of-the-art baseline text-only fake news detection methods that do not consider sentiment. We performed experiments on a standard Twitter fake news dataset and show good improvements in detecting fake news and rumours.
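    A minimal sketch (not the paper's method) of the general idea: augmenting text features with a sentiment score before classification. The toy lexicon and four example messages stand in for a real sentiment model and the Twitter dataset.

```python
# Combine TF-IDF text features with a crude lexicon-based sentiment score,
# then train a simple classifier on the stacked features.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

NEG = {"shocking", "hoax", "outrage", "fake"}
POS = {"confirmed", "official", "verified"}

def sentiment_score(text):
    words = text.lower().split()
    return (sum(w in POS for w in words) - sum(w in NEG for w in words)) / max(len(words), 1)

texts = ["Official report confirmed by verified sources",
         "Shocking hoax spreads outrage online",
         "Government publishes verified statistics",
         "Fake cure causes outrage say experts"]
labels = [0, 1, 0, 1]                       # 0 = genuine, 1 = fake/rumour

tfidf = TfidfVectorizer().fit_transform(texts)
senti = csr_matrix(np.array([[sentiment_score(t)] for t in texts]))
X = hstack([tfidf, senti])                  # text + sentiment features
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))
```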

    Semantic Metadata Extraction from Dense Video Captioning

    Annotation of multimedia data by humans is time-consuming and costly, while reliable automatic generation of semantic metadata is a major challenge. We propose a framework to extract semantic metadata from automatically generated video captions. As metadata, we consider entities, the entities' properties, relations between entities, and the video category. We employ two state-of-the-art dense video captioning models, one based on a masked transformer (MT) and one on parallel decoding (PVDC), to generate captions for videos of the ActivityNet Captions dataset. Our experiments show that it is possible to extract entities, their properties, relations between entities, and the video category from the generated captions. We observe that the quality of the extracted information is mainly influenced by the quality of the event localization in the video as well as the performance of the event caption generation.
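    The sketch below is an assumed illustration of caption-based metadata extraction, not the paper's pipeline: it pulls noun-phrase entities, adjective properties and rough subject/object relations out of a generated caption using spaCy (the en_core_web_sm model must be installed separately).

```python
# Extract simple semantic metadata from a generated caption via dependency
# parsing: noun chunks as entities, amod edges as properties, nsubj/dobj
# edges as relations to the governing verb.
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_metadata(caption):
    doc = nlp(caption)
    entities = [chunk.text for chunk in doc.noun_chunks]
    properties = [(tok.head.text, tok.text) for tok in doc
                  if tok.dep_ == "amod"]                    # noun <- adjective
    relations = [(tok.text, tok.head.text) for tok in doc
                 if tok.dep_ in ("nsubj", "dobj")]          # argument <-> verb
    return {"entities": entities, "properties": properties,
            "relations": relations}

print(extract_metadata("A young woman throws a red ball to a small dog"))
```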

    Global motion compensated visual attention-based video watermarking

    Imperceptibility and robustness are two key but complementary requirements of any watermarking algorithm. Low-strength watermarking yields high imperceptibility but exhibits poor robustness. High-strength watermarking schemes achieve good robustness but often suffer from embedding distortions, resulting in poor visual quality of the host media. This paper proposes a unique video watermarking algorithm that offers a fine balance between imperceptibility and robustness using a motion-compensated, wavelet-based visual attention model (VAM). The proposed VAM includes both spatial and temporal cues for visual saliency. The spatial modelling uses the spatial wavelet coefficients, while the temporal modelling accounts for both local and global motion, to arrive at a spatiotemporal VAM for video. The model is then used to develop a video watermarking algorithm in which a two-level watermarking weighting parameter map is generated from the VAM saliency maps and data are embedded into the host frames according to the visual attentiveness of each region. By avoiding higher-strength watermarking in visually attentive regions, the resulting watermarked video achieves high perceived visual quality while preserving high robustness. The proposed VAM outperforms state-of-the-art video visual attention methods in joint saliency detection and low computational complexity. For the same embedding distortion, the proposed visual attention-based watermarking achieves up to 39% (non-blind) and 22% (blind) improvement in robustness against H.264/AVC compression, compared to existing watermarking methodology that does not use a VAM. The proposed visual attention-based video watermarking results in visual quality similar to that of low-strength watermarking and robustness similar to that of high-strength watermarking.
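    As a simplified, assumed illustration of the two-level weighting idea, the snippet below thresholds a per-frame saliency map into a binary attention mask and assigns a weak embedding strength to salient pixels and a stronger one elsewhere; the mean threshold and the strength values are placeholders, not the paper's rule.

```python
# Turn a per-frame saliency map into a two-level embedding-strength map,
# then embed an additive watermark pattern into the frame.
import numpy as np

def two_level_weight_map(saliency, alpha_low=0.02, alpha_high=0.08):
    attentive = saliency > saliency.mean()          # binary saliency mask
    return np.where(attentive, alpha_low, alpha_high)

saliency = np.random.rand(144, 176)                 # stand-in saliency frame
alpha_map = two_level_weight_map(saliency)
frame = np.random.rand(144, 176)
wm = np.sign(np.random.randn(144, 176))
marked_frame = frame + alpha_map * wm               # per-frame embedding
```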

    Profile Guided Dataflow Transformation for FPGAs and CPUs

    This paper proposes a new high-level approach to optimising field programmable gate array (FPGA) designs. FPGA designs are commonly implemented in low-level hardware description languages (HDLs), which lack the abstractions necessary for identifying opportunities for significant performance improvements. Using a computer vision case study, we show that modelling computation with dataflow abstractions enables substantial restructuring of FPGA designs before lowering to the HDL level, and also improves CPU performance. Using the CPU transformations, runtime is reduced by 43%. Using the FPGA transformations, the clock frequency is increased from 67 MHz to 110 MHz. Our results outperform commercial low-level HDL optimisations, showcasing dataflow program abstraction as an amenable computation model for highly effective FPGA optimisation.
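    A toy Python sketch of the dataflow viewpoint (purely illustrative and unrelated to the actual toolchain): pipeline stages expressed as composable actors that a transformation pass can fuse before lowering to HDL or CPU code.

```python
# Represent vision-pipeline stages as actors and apply a "fusion"
# transformation that collapses a chain of actors into one stage,
# removing the intermediate buffers a naive pipeline would need.
from functools import reduce

def grayscale(pixel):         # actor: RGB -> luma
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def threshold(value, t=128):  # actor: luma -> binary
    return 1 if value > t else 0

def fuse(*actors):
    """Fusion transformation: compose a chain of actors into a single stage."""
    return lambda x: reduce(lambda v, f: f(v), actors, x)

pipeline = fuse(grayscale, threshold)
print(pipeline((200, 180, 90)))   # one fused stage per pixel -> 1
```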

    RIPL: An Efficient Image Processing DSL for FPGAs

    Field programmable gate arrays (FPGAs) can accelerate image processing by exploiting the fine-grained parallelism opportunities in image operations. FPGA language designs are often subsets or extensions of existing languages, but these typically lack suitable hardware computation models, so compiling them to FPGAs leads to inefficient designs. Moreover, these languages lack image processing domain specificity. Our solution is RIPL, an image processing domain specific language (DSL) for FPGAs. It provides algorithmic skeletons to express image processing, and these are exploited to generate deep pipelines of highly concurrent and memory-efficient image processing components. (Presented at the Second International Workshop on FPGAs for Software Programmers (FSP 2015), arXiv:1508.06320.)
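    The snippet below is an assumed Python illustration of the algorithmic-skeleton idea, not RIPL syntax: a pointwise "map" skeleton and a 3x3 "stencil" skeleton of the kind a DSL compiler could lower into a streaming FPGA pipeline.

```python
# Two image-processing skeletons: a pointwise map and a 3x3 stencil,
# composed into a small blur-then-threshold pipeline.
import numpy as np

def skel_map(f, img):
    """Pointwise skeleton: apply f to every pixel."""
    return np.vectorize(f)(img)

def skel_stencil(kernel, img):
    """3x3 stencil skeleton: weighted neighbourhood sum at each pixel."""
    out = np.zeros_like(img, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += kernel[dy + 1, dx + 1] * np.roll(img, (dy, dx), axis=(0, 1))
    return out

img = np.random.randint(0, 256, (64, 64))
blur = skel_stencil(np.full((3, 3), 1 / 9), img)          # smoothing stencil
edges = skel_map(lambda v: 255 if v > 100 else 0, blur)   # pointwise threshold
```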